Argus: Resilience-Oriented Safety Assurance Framework for End-to-End ADSs

Wang, Dingji, Lu, You, Chen, Bihuan, Hao, Shuo, Jiang, Haowen, Tian, Yifan, Peng, Xin

arXiv.org Artificial Intelligence

End-to-end autonomous driving systems (ADSs), with their strong capabilities in environmental perception and generalizable driving decisions, are attracting growing attention from both academia and industry. However, once deployed on public roads, ADSs are inevitably exposed to diverse driving hazards that may compromise safety and degrade system performance. This raises a strong demand for resilience of ADSs, particularly the capability to continuously monitor driving hazards and adaptively respond to potential safety violations, which is crucial for maintaining robust driving behaviors in complex driving scenarios. To bridge this gap, we propose a runtime resilience-oriented framework, Argus, to mitigate the driving hazards, thus preventing potential safety violations and improving the driving performance of an ADS. Argus continuously monitors the trajectories generated by the ADS for potential hazards and, whenever the EGO vehicle is deemed unsafe, seamlessly takes control through a hazard mitigator. We integrate Argus with three state-of-the-art end-to-end ADSs, i.e., TCP, UniAD and VAD. Our evaluation has demonstrated that Argus effectively and efficiently enhances the resilience of ADSs, improving the driving score of the ADS by up to 150.30% on average, and preventing up to 64.38% of the violations, with little additional time overhead.
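The monitor-then-mitigate loop the abstract describes can be sketched as follows. This is a hedged toy illustration, not Argus's real API: the `Waypoint` type, `monitor_and_mitigate` function, obstacle representation, and the 2.0 m safety threshold are all assumptions made for the example.

```python
from dataclasses import dataclass

# Hypothetical sketch of a runtime monitor that inspects the trajectory
# produced by the ADS and hands control to a hazard mitigator when the
# EGO vehicle is deemed unsafe. Names and thresholds are illustrative.

@dataclass
class Waypoint:
    x: float
    y: float

def min_distance_to_obstacles(traj, obstacles):
    """Smallest Euclidean distance from any planned waypoint to any obstacle."""
    return min(
        ((w.x - ox) ** 2 + (w.y - oy) ** 2) ** 0.5
        for w in traj for (ox, oy) in obstacles
    )

def monitor_and_mitigate(traj, obstacles, safe_dist=2.0):
    """Pass the ADS trajectory through if safe, else substitute a mitigated one."""
    if min_distance_to_obstacles(traj, obstacles) >= safe_dist:
        return traj, "ads"
    # Hazard mitigator: here, simply hold position (emergency stop).
    return [traj[0]] * len(traj), "mitigator"

traj = [Waypoint(0.0, 0.0), Waypoint(1.0, 0.0), Waypoint(2.0, 0.0)]
obstacles = [(2.5, 0.0)]
safe_traj, source = monitor_and_mitigate(traj, obstacles)
print(source)  # obstacle within 2.0 m of the planned path, so "mitigator"
```

A real mitigator would plan a safe fallback trajectory rather than a hard stop; the point of the sketch is only the seamless hand-over between the monitored ADS output and the fallback controller.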


Using Formal Models, Safety Shields and Certified Control to Validate AI-Based Train Systems

Gruteser, Jan, Roßbach, Jan, Vu, Fabian, Leuschel, Michael

arXiv.org Artificial Intelligence

The certification of autonomous systems is an important concern in science and industry. The KI-LOK project explores new methods for certifying and safely integrating AI components into autonomous trains. We pursued a two-layered approach: (1) ensuring the safety of the steering system by formal analysis using the B method, and (2) improving the reliability of the perception system with a runtime certificate checker. This work links both strategies within a demonstrator that runs simulations on the formal model, controlled by the real AI output and the real certificate checker. The demonstrator is integrated into the validation tool ProB. This enables runtime monitoring, runtime verification, and statistical validation of formal safety properties using a formal B model. Consequently, one can detect and analyse potential vulnerabilities and weaknesses of the AI and the certificate checker. We apply these techniques to a signal detection case study and present our findings.
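The second layer, a runtime certificate checker gating the perception output, can be illustrated with a minimal sketch. The plausibility checks, field names, and the brake-by-default policy below are assumptions for illustration only, not the KI-LOK project's actual checker.

```python
# Hedged sketch of a runtime certificate check: the checker validates each
# AI perception output before the (formally analysed) steering logic may
# act on it. Thresholds and the detection schema are illustrative.

def certificate_ok(detection, min_confidence=0.9, frame=(0, 1920, 0, 1080)):
    """Accept a signal detection only if it passes simple plausibility checks."""
    x_min, x_max, y_min, y_max = frame
    return (
        x_min <= detection["x"] <= x_max
        and y_min <= detection["y"] <= y_max
        and detection["confidence"] >= min_confidence
    )

def safe_action(detection):
    """Fall back to the safe default (brake) whenever the certificate fails."""
    if certificate_ok(detection):
        return "proceed" if detection["state"] == "clear" else "brake"
    return "brake"  # uncertified perception output: stay on the safe side

print(safe_action({"x": 400, "y": 300, "confidence": 0.95, "state": "clear"}))  # proceed
print(safe_action({"x": 400, "y": 300, "confidence": 0.42, "state": "clear"}))  # brake
```

The design choice mirrored here is that the certificate checker never needs to trust the AI: it only ever widens the set of situations in which the safe default is overridden, which is what makes properties about it amenable to formal analysis.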


Vegetable Peeling: A Case Study in Constrained Dexterous Manipulation

Chen, Tao, Cousineau, Eric, Kuppuswamy, Naveen, Agrawal, Pulkit

arXiv.org Artificial Intelligence

Having robots perform food preparation tasks has been of great interest in robotics. Imagine the scenario of making mashed potatoes, where a critical step is to peel potatoes. Humans peel potatoes by grasping the potato in one hand and using the other hand to actuate a peeler to remove the potato's skin. After a part of the potato is peeled, it is rotated while being held in the hand (i.e., in-hand manipulation) and peeled again. The sequence of rotating and peeling continues until all of the potato's skin is removed. In this work, we present a robotic system that can re-orient different vegetables using an Allegro hand in a way that their skin can be peeled using another manipulator. Our setup is shown in Figure 1 and Figure 2. In-hand rotation of vegetables is an instance of a dexterous manipulation problem [1], a family of tasks that involves continuously controlling the force on an object while it is moving with respect to the fingertips [2, 3].


A rational decision making framework for inhibitory control

Shenoy, Pradeep, Yu, Angela J., Rao, Rajesh P.

Neural Information Processing Systems

Intelligent agents are often faced with the need to choose actions with uncertain consequences, and to modify those actions according to ongoing sensory processing and changing task demands. The requisite ability to dynamically modify or cancel planned actions is known as inhibitory control in psychology. We formalize inhibitory control as a rational decision-making problem, and apply it to the classical stop-signal task. Using Bayesian inference and stochastic control tools, we show that the optimal policy systematically depends on various parameters of the problem, such as the relative costs of different action choices, the noise level of sensory inputs, and the dynamics of changing environmental demands. Our normative model accounts for a range of behavioral data in humans and animals in the stop-signal task, suggesting that the brain implements statistically optimal, dynamically adaptive, and reward-sensitive decision-making in the context of inhibitory control problems.
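The abstract's framing, Bayesian inference over whether a stop signal is present combined with a cost-sensitive go/cancel decision, can be sketched in a toy form. The Gaussian likelihoods, prior, and costs below are illustrative assumptions, not the paper's fitted model.

```python
import math

# Toy sketch: infer the posterior probability that a stop signal has
# appeared from noisy sensory samples, then cancel the planned action
# when the expected cost of going exceeds the expected cost of stopping.

def posterior_stop(observations, p_stop_prior=0.3, mu_stop=1.0, sigma=1.0):
    """P(stop signal present | observations), assuming Gaussian likelihoods
    with mean mu_stop if the signal is present and mean 0 otherwise."""
    def loglik(mu):
        return sum(-((o - mu) ** 2) / (2 * sigma ** 2) for o in observations)
    log_stop = math.log(p_stop_prior) + loglik(mu_stop)
    log_go = math.log(1 - p_stop_prior) + loglik(0.0)
    m = max(log_stop, log_go)  # subtract max for numerical stability
    return math.exp(log_stop - m) / (math.exp(log_stop - m) + math.exp(log_go - m))

def decide(observations, cost_miss_stop=2.0, cost_false_stop=1.0):
    """Cancel when the expected cost of acting despite a stop signal
    exceeds the expected cost of needlessly withholding the action."""
    p = posterior_stop(observations)
    return "cancel" if p * cost_miss_stop > (1 - p) * cost_false_stop else "go"

print(decide([1.2, 0.9, 1.1]))   # strong evidence for a stop signal -> cancel
print(decide([-0.2, 0.1, 0.0]))  # noise only -> go
```

Varying `cost_miss_stop` and `cost_false_stop` shifts the decision boundary, echoing the abstract's point that the optimal policy depends on the relative costs of the action choices and the sensory noise level.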